6 - Artificial Intelligence II [ID:57506]

Okay, looks like the quiz worked for you.

That true?

Okay.

Let me remind you where we are in the big scope of things.

We are talking about agents, model-based agents, where the state of the world is actually approximated as a set of possible states with a probability distribution over them,

because we're interested in modeling how likely certain states are.

We are in a possibly only partially observable world, which means the way we model the world is by a set of random variables.

Remember random variables, we can understand in two ways.

One is as functions from the sample space, and so on; that's the math.

Or we can think of them as features of the world that we can observe and that we want to know the likelihoods of.

And we've kind of done the math and eventually came up with this representation, which we call Bayesian networks.

Our example of a Bayesian network was the alarm example, which had five random variables.

Typical models of the world have many, many more of those variables.

And you should realize that we're only talking about world models right now.

What we're not talking about yet, but might actually reach today or tomorrow, is how we actually make decisions about actions.

That's not something we're doing yet. We're just modeling the world.

We're dealing with sensors. We're dealing with partial observability and so on.

And, as we talked about last time, these models can be of different quality.

There's a quality criterion for the models: in particular, the models with fewer arrows are better.

The good ones look like this and the bad ones we constructed look like that.

Essentially, in the computation, in Bayesian inference, we have to look at the number of incoming arrows.

The incoming arrows correspond to the dimensionality of the conditional probability tables and to the sizes of the sums and products that go into the computation.

So this is much worse than, say, that.

And that is, again, much worse than the kind of X-shaped model that we had before.

So there's a quality difference in the models.
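
To make the size argument concrete, here is a minimal sketch, assuming all variables are Boolean: a node with k incoming arrows needs a conditional probability table with 2^k independent entries, and the size of the whole network is the sum over its nodes. The function names are mine, not from the lecture.

```python
def cpt_entries(num_parents: int) -> int:
    """Independent numbers in the CPT of a Boolean node with Boolean parents."""
    return 2 ** num_parents

def network_size(parent_counts: list[int]) -> int:
    """Total number of independent probabilities in the whole network."""
    return sum(cpt_entries(k) for k in parent_counts)

print([cpt_entries(k) for k in range(4)])   # [1, 2, 4, 8]: doubles with every extra arrow
print(network_size([0, 0, 2, 1, 1]))        # an alarm-shaped network: 10 numbers
```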

So when you think of yourselves as the agent designers, it pays off to build the right graph.

And the kind of cooking recipe that we came up with is that you incrementally build the graph, putting causes before effects.

And if you don't do it correctly, then you come up with something like this.

When you do it correctly, you come up with this X-shaped thing where you have the causes, namely earthquake and burglary somewhere up there.

They cause the alarm, which in turn causes John and Mary to call.

And that kind of gives you the smallest set of conditional probability tables.

And that makes computation easiest.
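
As a concrete illustration (a sketch in my own words, not the lecturer's code), here is the parameter count for the causal ordering of the alarm network versus a fully reversed ordering; the exact bad ordering shown on the slides may differ, but the blow-up is the same in kind.

```python
causal_order = {            # node -> its parents, causes before effects
    "Burglary":   [],
    "Earthquake": [],
    "Alarm":      ["Burglary", "Earthquake"],
    "JohnCalls":  ["Alarm"],
    "MaryCalls":  ["Alarm"],
}

reversed_order = {          # effects before causes (illustrative worst case)
    "MaryCalls":  [],
    "JohnCalls":  ["MaryCalls"],
    "Earthquake": ["MaryCalls", "JohnCalls"],
    "Burglary":   ["MaryCalls", "JohnCalls", "Earthquake"],
    "Alarm":      ["MaryCalls", "JohnCalls", "Earthquake", "Burglary"],
}

def num_parameters(graph: dict) -> int:
    # Each Boolean node with k Boolean parents needs 2^k independent numbers.
    return sum(2 ** len(parents) for parents in graph.values())

print(num_parameters(causal_order))    # 1 + 1 + 4 + 2 + 2 = 10
print(num_parameters(reversed_order))  # 1 + 2 + 4 + 8 + 16 = 31
```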

Today I would like to look at algorithms.

We have the basic way of doing things: chain-rule products, summing out (marginalization), and normalization.

And it turns out we can actually do better.

And that's what I would like to look at today.
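
To fix ideas before we improve on it, here is a minimal sketch of that basic approach, inference by enumeration, on the alarm network. The probability values are the usual textbook numbers for this example and are meant as an illustration; they are not read off the lecture slides.

```python
from itertools import product

# P(var = true | parent assignment) for each node of the alarm network.
cpt = {
    "B": lambda e: 0.001,
    "E": lambda e: 0.002,
    "A": lambda e: {(True, True): 0.95, (True, False): 0.94,
                    (False, True): 0.29, (False, False): 0.001}[(e["B"], e["E"])],
    "J": lambda e: 0.90 if e["A"] else 0.05,
    "M": lambda e: 0.70 if e["A"] else 0.01,
}
order = ["B", "E", "A", "J", "M"]   # causes before effects

def joint(e):
    """Chain-rule product P(b, e, a, j, m) for one full assignment."""
    p = 1.0
    for var in order:
        p_true = cpt[var](e)
        p *= p_true if e[var] else 1.0 - p_true
    return p

def query(var, evidence):
    """P(var | evidence): sum out the hidden variables, then normalize."""
    dist = {}
    hidden = [v for v in order if v != var and v not in evidence]
    for value in (True, False):
        total = 0.0
        for values in product((True, False), repeat=len(hidden)):
            e = {**evidence, var: value, **dict(zip(hidden, values))}
            total += joint(e)
        dist[value] = total
    norm = sum(dist.values())
    return {v: p / norm for v, p in dist.items()}

print(query("B", {"J": True, "M": True}))   # roughly {True: 0.284, False: 0.716}
```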

But before we do so, I would like to remind you that very often parts of a Bayesian network are very simple because some node is deterministic.

Things are deterministic if there's a crisp, non-stochastic logic behind them.

Things like EU citizenship, which is defined as being German or French or Slovenian or a citizen of one of the 24 other member states.

There's nothing probabilistic about it.
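
As a small sketch of what a deterministic node looks like, assuming an illustrative (and deliberately incomplete) list of member states: its "conditional probability table" contains only zeros and ones, because the value is a logical function of the parents.

```python
def eu_citizen(citizenships: set[str]) -> bool:
    """Deterministic node: true iff one of the citizenships is an EU membership."""
    eu_members = {"German", "French", "Slovenian"}   # ...plus the 24 others
    return bool(citizenships & eu_members)

# P(EUCitizen = true | parents) is always exactly 0 or 1:
print(eu_citizen({"French"}))    # True
print(eu_citizen({"Japanese"}))  # False
```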

And there are other things that are like that.

For instance, we looked at inhibited disjunctions, where instead of fully general conditional probability tables you can basically make do with a couple of inhibition factors.

That gives you tables described by fewer numbers, which means less computation and also less modeling effort.
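
Here is a minimal sketch of such an inhibited disjunction, in the style of what is often called a noisy-OR: one inhibition factor per cause is enough to fill the whole table, and the probability that the effect fails is the product of the inhibition factors of the causes that are present. The causes and numbers below are made up for illustration.

```python
from itertools import product

inhibition = {"Cold": 0.6, "Flu": 0.2, "Malaria": 0.1}   # one factor per cause

def p_effect(causes_present: dict) -> float:
    """P(effect = true | causes) under the inhibited-disjunction assumption."""
    q = 1.0
    for cause, present in causes_present.items():
        if present:
            q *= inhibition[cause]   # each present cause can still be inhibited
    return 1.0 - q

# Three numbers fill the entire 2^3-row conditional probability table:
for combo in product((False, True), repeat=3):
    causes = dict(zip(inhibition, combo))
    print(causes, round(p_effect(causes), 3))
```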

Right.

So if you design an agent, you design the set of variables first, then you design the graph,

and then you design the conditional probability tables, trying to get by with as few numbers as possible.

And if you do that well, then things become tractable.

How tractable is something we're going to look at now.

Part of a video series
Access: open access
Duration: 01:20:10
Recording date: 2025-05-13
Uploaded: 2025-05-14 13:09:09
Language: en-US
